78 research outputs found

    Face Detection & Recognition based on Fusion of Omnidirectional & PTZ Vision Sensors and Heterogeneous Database

    Get PDF
    A large field of view with high resolution has always been sought after for mobile robotic authentication. The vision system proposed here is composed of a catadioptric sensor for full-range monitoring and a Pan-Tilt-Zoom (PTZ) camera, together forming an innovative sensor able to detect and track any moving object at a higher zoom level. In our application, the catadioptric sensor is calibrated and used to detect and track Regions Of Interest (ROIs) within its 360-degree Field Of View (FOV), especially face regions. Using a joint calibration strategy, the PTZ camera parameters are automatically adjusted by the system in order to detect and track the face ROI at a higher resolution and project it into face space for recognition via the Eigenface algorithm. Face recognition is an important task in the Nomad Biometric Authentication (NOBA) project. However, like many other face databases, the NOBA data easily give rise to the Small Sample Size (SSS) problem in some applications. This work therefore uses a compressed sensing (CS) algorithm to solve the SSS problem on the NOBA face database, and experiments confirm the feasibility and validity of this solution. The whole development has been partially validated by application to face recognition using our own NOBA database.
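The Eigenface step mentioned above can be sketched in a few lines: project faces onto a PCA basis ("face space") and classify a probe by nearest neighbour. This is a minimal illustration with random toy data, not the NOBA implementation; the function names and the choice of `k` are ours.

```python
import numpy as np

# Minimal Eigenface sketch (illustrative, not the NOBA system's code):
# flatten training faces, compute a PCA basis, project into "face space",
# and classify a probe by nearest neighbour among projected training faces.

def train_eigenfaces(faces, k):
    """faces: (n_samples, h*w) array of flattened grayscale faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # top-k eigenfaces, shape (k, h*w)
    weights = centered @ basis.T    # training projections in face space
    return mean, basis, weights

def recognize(probe, mean, basis, weights):
    """Return the index of the closest training face in face space."""
    w = (probe - mean) @ basis.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))

# Toy usage with random "faces" (8x8 images flattened to 64 values)
rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))
mean, basis, weights = train_eigenfaces(faces, k=5)
assert recognize(faces[3], mean, basis, weights) == 3
```

A real pipeline would feed the PTZ face crops in as rows of `faces` after resizing and grayscale conversion.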

    REAL-TIME PEDESTRIAN DETECTION BASED ON FASTER HOG/DPM AND DEEP LEARNING APPROACHES

    Get PDF
    The work presented aims to show the feasibility of scientific and technological concepts in embedded vision dedicated to the extraction of image characteristics allowing the detection, recognition and localization of objects. Object and pedestrian detection are carried out by two methods: 1. A classical image processing approach, improved with Histogram of Oriented Gradients (HOG) and Deformable Part Model (DPM) based detection and pattern recognition. We present how we have improved the HOG/DPM approach to make pedestrian detection a real-time task by reducing computation time. The developed approach not only detects pedestrians but also calculates the distance between pedestrians and the vehicle. 2. Pedestrian detection based on Artificial Intelligence (AI) approaches such as Deep Learning (DL). This work was first validated on a closed circuit and subsequently under real traffic conditions on mobile platforms (mobile robot, drone and vehicles). Several tests have been carried out in the city center of Rouen in order to validate the developed platform.
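The pedestrian-to-vehicle distance mentioned above can be estimated monocularly from the detection box under a pinhole-camera model. The sketch below is our own illustration, not the paper's exact method: the focal length and assumed pedestrian height are hypothetical values.

```python
# Minimal sketch of monocular distance estimation from a detection box,
# under a pinhole-camera model. Illustrative only: the focal length and
# the assumed pedestrian height below are stand-in values, not the paper's.

def distance_from_bbox(bbox_height_px, focal_px=800.0, person_height_m=1.7):
    """Estimate range to a pedestrian from the pixel height of its box.

    bbox_height_px  -- height of the detected bounding box, in pixels
    focal_px        -- camera focal length expressed in pixels (assumed)
    person_height_m -- assumed real-world pedestrian height in metres
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    # Similar triangles: real_height / distance = pixel_height / focal
    return focal_px * person_height_m / bbox_height_px

# A 170 px tall box with f = 800 px and a 1.7 m pedestrian -> 8.0 m
print(distance_from_bbox(170.0))  # 8.0
```

In practice the calibration would supply `focal_px`, and the fixed-height assumption is the main source of error.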

    Real Time Object Detection, Tracking, and Distance and Motion Estimation based on Deep Learning: Application to Smart Mobility

    Get PDF
    In this paper, we introduce our object detection, localization and tracking system for smart mobility applications such as road traffic and railway environments. Firstly, an object detection and tracking approach was carried out with two deep learning approaches: You Only Look Once (YOLO) v3 and Single Shot Detector (SSD). A comparison between the two methods allows us to identify their applicability to the traffic environment; performances in both road and railway environments were evaluated. Secondly, object distance estimation based on the Monodepth algorithm was developed. This model is trained on a stereo image dataset but its inference uses monocular images; the output is a disparity map that we combine with the output of object detection. To validate our approach, we tested two models with different backbones, VGG and ResNet, on two datasets: Cityscape and KITTI. As the last step of our approach, we developed a new SSD-based method to analyse the behavior of pedestrians and vehicles by tracking their movements, even when detection fails on some images of a sequence. We developed an algorithm based on the coordinates of the output bounding boxes of the SSD algorithm; the objective is to determine whether the trajectory of a pedestrian or vehicle can lead to a dangerous situation. The whole development was tested in real vehicle traffic conditions in Rouen city center, and with videos taken by embedded cameras along the Rouen tramway.
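The idea of tracking through missed detections can be sketched by coasting a bounding box with its last observed velocity when the detector returns nothing for a frame. This is our own minimal illustration of the principle, not the paper's algorithm; the class name and the constant-velocity assumption are ours.

```python
# Minimal sketch of bridging missed detections by coasting a box with its
# last observed velocity, in the spirit of the bounding-box tracking the
# abstract describes. Illustrative only; not the authors' exact method.

class CoastingTrack:
    def __init__(self, box):
        self.box = box              # (cx, cy, w, h) of latest bounding box
        self.velocity = (0.0, 0.0)  # last observed (dx, dy) per frame

    def update(self, detection):
        """detection: new (cx, cy, w, h), or None if detection failed."""
        if detection is None:
            # No detection this frame: predict by coasting with last velocity.
            cx, cy, w, h = self.box
            vx, vy = self.velocity
            self.box = (cx + vx, cy + vy, w, h)
        else:
            vx = detection[0] - self.box[0]
            vy = detection[1] - self.box[1]
            self.velocity = (vx, vy)
            self.box = detection
        return self.box

track = CoastingTrack((100.0, 50.0, 40.0, 80.0))
track.update((110.0, 50.0, 40.0, 80.0))   # observed: moving +10 px/frame in x
print(track.update(None))                 # (120.0, 50.0, 40.0, 80.0)
```

A trajectory-danger test could then be run on the sequence of predicted box centers even across detection gaps.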

    ADAPT Project Publications Booklet

    Get PDF

    TRAJECTOGRAPHY ESTIMATION FOR A SMART POWERED WHEELCHAIR: ORB-SLAM2 VS RTAB-MAP (PREPRINT)

    Get PDF
    This work is part of the ADAPT project and concerns the implementation of a trajectography functionality that aims to measure the path travelled by a patient during clinical trials. The system (hardware and software) must be reliable, quickly integrable, low-cost and real-time. Our choices therefore naturally fell on visual SLAM-based solutions coupled with an Intel RealSense consumer sensor. This paper compares two visual SLAM algorithms well known in the scientific community, ORB-SLAM2 and RTAB-Map, evaluated in different path configurations. The added value of our work lies in the accurate estimation of the trajectories, achieved through the use of a VICON motion capture system.
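Scoring a SLAM trajectory against motion-capture ground truth is commonly done with the absolute trajectory error (ATE). The sketch below is our own illustration of that metric, assuming the two trajectories are already time-synchronised and expressed in the same frame; it is not the paper's evaluation code.

```python
import numpy as np

# Illustrative sketch of scoring an estimated SLAM trajectory against
# motion-capture ground truth with the absolute trajectory error (ATE),
# assuming the paths are already synchronised and in a common frame.

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two (n, 3) paths."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)   # per-pose position error (m)
    return float(np.sqrt(np.mean(errors ** 2)))

gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]])
est = np.array([[0, 0.1, 0], [1, 0.1, 0], [2, 0.1, 0]])  # constant 10 cm offset
print(ate_rmse(est, gt))  # ~0.1
```

A full evaluation would first align the estimate to the VICON frame (e.g. with a rigid-body fit) before computing the error.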

    Controlled extraction of image features and automation of 3D reconstruction: application to vision-based dimensional measurement

    No full text
    The set-up of a full vision-based 3D reconstruction system for quasi-polyhedral objects, for example in view of inspection and quality control tasks, requires the coordination of a set of complex processes allowing acquisition and analysis of image data, their dimensional evaluation, as well as their comparison with a reference model. Our contribution concerns the development of a computer vision system relying on automated management of the processing, allowing the full dimensional analysis of a manufactured part including free-form surfaces, and enabling the control of various measuring heads. It makes use of a priori conceptual knowledge related not only to the object to be analyzed but also to its environment, and is articulated around the cooperation of two distinct modules. The first module, organizing and controlling the application execution, relies on the synthesis of a priori knowledge related to the object model as well as to the acquisition system. More specifically, the processing sequence is elaborated off-line and plans the full 3D reconstruction of the object and its evaluation. The second exploits the Situation Graph Trees formalism and carries out, on-line, the planned acquisition and partial reconstruction operations. It is also able to dynamically adapt its behaviour to the actual acquisition conditions and to automatically modify the effective acquisition and data processing conditions. Lastly, with the aim of fully automating the data processing chain, a procedure for delineating the object from the image background has been developed. In order to avoid any post-processing, a segmentation approach using active contours has been privileged and a parameter adjustment method using design of experiments has been adopted. The whole of the developments has been partially validated by application to the evaluation of a turbine blade.
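The design-of-experiments parameter tuning mentioned above can be sketched as evaluating a segmentation-quality score over a full-factorial grid of active-contour parameters. Everything below (parameter names, levels, and the scoring function) is a hypothetical stand-in, not the thesis' actual set-up.

```python
import itertools

# Minimal sketch of design-of-experiments style parameter tuning: evaluate
# a segmentation-quality score over a full-factorial grid of active-contour
# parameters. Parameter names, levels and the score are hypothetical.

def run_experiments(levels, score):
    """levels: dict name -> list of candidate values; score: dict -> float."""
    names = list(levels)
    best_params, best_score = None, float("-inf")
    for combo in itertools.product(*(levels[n] for n in names)):
        params = dict(zip(names, combo))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Stand-in score peaking at alpha=0.2, beta=0.5 (a real run would measure
# contour quality against reference segmentations instead).
score = lambda p: -((p["alpha"] - 0.2) ** 2 + (p["beta"] - 0.5) ** 2)
levels = {"alpha": [0.1, 0.2, 0.4], "beta": [0.25, 0.5, 1.0]}
best, _ = run_experiments(levels, score)
print(best)  # {'alpha': 0.2, 'beta': 0.5}
```

A fractional factorial design would reduce the number of runs when the grid grows; the full-factorial loop above is only the simplest case.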

    Contribution to Environment Perception for Smart Mobility

    No full text
    Autonomous vehicles are increasingly present in our daily lives, opening new perspectives for smart mobility. An autonomous vehicle must include three essential functions: perception, decision, and action. The more the system can perceive its environment, the better decisions it will make, allowing it to trigger high-quality actions that meet safety, comfort, and energy requirements. Object detection, localization, and tracking are essential tasks for environment perception. Since 2012, deep learning has become a very powerful tool due to its ability to process large amounts of data, and the emergence of deep-learning-based methods has led to significant progress. Despite this enthusiasm for AI, few methods focus on the real-time aspect that is essential for real applications, due to high computational costs. In addition, these algorithms have obvious shortcomings in complex scenes, due in part to the lack of ground-truth data, as is the case for railway and healthcare smart mobility. Beyond accuracy and speed, perception algorithms must also take into account the energy constraints of embedded systems. My research work addresses this issue and therefore focuses on environment perception for two smart mobility fields: road/railway and mobile robotics/healthcare. The objective is to reach a level of analysis and understanding of complex scenes that ensures smart mobility with a very high level of safety, comfort, and optimal energy use. This requires two essential and complementary axes: 1. a multi-sensor fusion system to further enrich perception with heterogeneous data; 2. AI-based environment perception to exploit the collected data for better prediction of all situations. It is therefore the development of open generic platforms to experiment with and validate technological and scientific concepts for the academic and industrial world.
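The simplest form of the multi-sensor fusion axis above is combining two noisy measurements of the same quantity, weighted by the inverse of each sensor's variance, as in a one-step Kalman/least-squares update. This is a generic textbook sketch; the sensor pairing and noise figures are assumed, not taken from the author's system.

```python
# Illustrative sketch of inverse-variance weighted fusion of two scalar
# measurements of the same quantity (one-step Kalman/least-squares update).
# The sensor pairing and noise figures in the usage line are assumptions.

def fuse(z1, var1, z2, var2):
    """Fuse two measurements; returns (fused_value, fused_variance)."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)   # fused estimate is more certain than either
    return fused, fused_var

# e.g. a lidar range (low noise) fused with a camera-derived range (noisier)
z, v = fuse(10.0, 0.04, 11.0, 0.36)
print(round(z, 2), round(v, 3))  # 10.1 0.036
```

The fused value sits closer to the more trusted sensor, and the fused variance is smaller than either input variance, which is exactly how heterogeneous data enriches perception.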